
    Perception Versus Punishment in Cybercrime


    Experimental Measurement of Attitudes Regarding Cybercrime

    We conducted six between-subjects survey experiments to examine how judgments of cybercrime vary as a function of characteristics of the crime. The experiments presented vignettes that described a fictional cybercrime in which someone broke into an organization’s network and downloaded data records. In each experiment, we manipulated the vignettes along one dimension: type of data, scope, motivation, the organization’s co-responsibility for the crime, consequences, and context. Participants were U.S. residents recruited via Amazon Mechanical Turk. We find that scope (the number of records downloaded) and the attacker’s motivation had significant effects on the perceived seriousness of the crime. Participants also recommended harsher punishments when the monetary costs of the cybercrime were higher. Furthermore, participants considered cybercrimes committed by activists to be significantly less blameworthy, and deserving of significantly lighter sentences, than cybercrimes committed for profit—contrary to the position sometimes taken by U.S. prosecutors.
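The abstract describes between-subjects comparisons of ratings across vignette conditions. As a hedged illustration only (not the authors' actual analysis, and with made-up numbers), a simple permutation test on hypothetical 7-point seriousness ratings for two motivation conditions might look like:

```python
import random

def permutation_test(group_a, group_b, n_iter=10000, seed=0):
    """Two-sided permutation test on the absolute difference of group means."""
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    hits = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        diff = abs(sum(pooled[:n_a]) / n_a
                   - sum(pooled[n_a:]) / (len(pooled) - n_a))
        if diff >= observed:
            hits += 1
    return hits / n_iter

# Hypothetical 7-point seriousness ratings for two vignette conditions
# (activist vs. for-profit motivation); all values are illustrative.
activist = [3, 4, 2, 3, 4, 3, 2, 4, 3, 3]
profit   = [6, 5, 7, 6, 5, 6, 7, 5, 6, 6]
p = permutation_test(activist, profit)
```

With clearly separated groups like these, the permutation p-value comes out very small, mirroring the kind of significant condition effect the study reports.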

    Inferring Species Trees Directly from Biallelic Genetic Markers: Bypassing Gene Trees in a Full Coalescent Analysis

    The multi-species coalescent provides an elegant theoretical framework for estimating species trees and species demographics from genetic markers. Practical applications of the multi-species coalescent model are, however, limited by the need to integrate or sample over all gene trees possible for each genetic marker. Here we describe a polynomial-time algorithm that computes the likelihood of a species tree directly from the markers under a finite-sites model of mutation, effectively integrating over all possible gene trees. The method applies to independent (unlinked) biallelic markers such as well-spaced single nucleotide polymorphisms (SNPs), and we have implemented it in SNAPP, a Markov chain Monte Carlo sampler for inferring species trees, divergence dates, and population sizes. We report results from simulation experiments and from an analysis of 1997 amplified fragment length polymorphism (AFLP) loci in 69 individuals sampled from six species of Ourisia (New Zealand native foxglove).
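The paper's contribution is a polynomial-time likelihood that integrates over all gene trees; that algorithm is beyond a short sketch, but the core idea of propagating partial likelihoods up a tree under a finite-sites, two-state (biallelic) mutation model can be illustrated with classic Felsenstein pruning on a toy fixed tree. This is per-lineage pruning, not SNAPP's allele-count extension, and the tree and branch lengths are invented for illustration:

```python
import math

def trans(t, mu=1.0):
    """Transition matrix for a symmetric two-state (biallelic) mutation
    model: P[i][j] = probability of ending in state j after time t."""
    same = 0.5 + 0.5 * math.exp(-2 * mu * t)
    return [[same, 1.0 - same], [1.0 - same, same]]

def pruning_likelihood(tree, data):
    """Felsenstein pruning. A tree is a leaf name (str) or a tuple
    (left, left_branch, right, right_branch). Returns the partial
    likelihood vector [L(state 0), L(state 1)] at this node."""
    if isinstance(tree, str):                # leaf: observed state
        v = [0.0, 0.0]
        v[data[tree]] = 1.0
        return v
    left, bl, right, br = tree
    lv = pruning_likelihood(left, data)
    rv = pruning_likelihood(right, data)
    pl, pr = trans(bl), trans(br)
    return [
        sum(pl[i][j] * lv[j] for j in range(2)) *
        sum(pr[i][j] * rv[j] for j in range(2))
        for i in range(2)
    ]

# Hypothetical three-taxon tree ((A:0.1, B:0.1):0.2, C:0.3) and one
# biallelic site; numbers are illustrative only.
tree = (("A", 0.1, "B", 0.1), 0.2, "C", 0.3)
site = {"A": 0, "B": 0, "C": 1}
root = pruning_likelihood(tree, site)
lik = 0.5 * root[0] + 0.5 * root[1]          # uniform root frequencies
```

Because the two-state model is symmetric, relabelling both alleles leaves the site likelihood unchanged, which is a convenient sanity check on the recursion.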

    Molecular and Genetic Evidence for a Virus-Encoded Glycosyltransferase Involved in Protein Glycosylation

    The major capsid protein, Vp54, of chlorella virus PBCV-1 is a glycoprotein that contains either one glycan of ∼30 sugar residues or two similar glycans of ∼15 residues. Previous analysis of PBCV-1 antigenic mutants that contained altered Vp54 glycans led to the conclusion that unlike other glycoprotein-containing viruses, most, if not all, of the enzymes involved in the synthesis of the Vp54 glycan are probably encoded by PBCV-1 (I.-N. Wang et al., 1993, Proc. Natl. Acad. Sci. USA 90, 3840–3844). In this report we used molecular and genetic approaches to begin to identify these virus genes. Comparing the deduced amino acid sequences of the putative 375 PBCV-1 protein-encoding genes to databases identified seven potential glycosyltransferases. One gene, designated a64r, encodes a 638-amino-acid protein that has four motifs conserved in “Fringe type” glycosyltransferases. Analysis of 13 PBCV-1 antigenic mutants revealed mutations in a64r that correlated with a specific antigenic variation. Dual-infection experiments with different antigenic mutants indicated that viruses that contained wild-type a64r could complement and recombine with viruses that contained mutant a64r to form wild-type virus. Therefore, we conclude that a64r encodes a glycosyltransferase involved in synthesizing the Vp54 glycan. This is the first report of a virus-encoded glycosyltransferase involved in protein glycosylation.

    The Environment of Warm-Season Elevated Thunderstorms Associated with Heavy Rainfall Over the Central United States

    Twenty-one warm-season heavy-rainfall events in the central United States produced by mesoscale convective systems (MCSs) that developed above and north of a surface boundary are examined to define the environmental conditions and physical processes associated with these phenomena. Storm-relative composites of numerous kinematic and thermodynamic fields are computed by centering on the heavy-rain-producing region of the parent elevated MCS. Results reveal that the heavy-rain region of elevated MCSs is located on average about 160 km north of a quasi-stationary frontal zone, in a region of low-level moisture convergence that is elongated westward on the cool side of the boundary. The MCS is located within the left-exit region of a south-southwesterly low-level jet (LLJ) and the right-entrance region of an upper-level jet positioned well north of the MCS site. The LLJ is directed toward a divergence maximum at 250 hPa that is coincident with the MCS site. Near-surface winds are light and from the southeast within a boundary layer that is statically stable and cool. Winds veer considerably with height (about 140°) from 850 to 250 hPa, a layer associated with warm-air advection. The MCS is located in a maximum of positive equivalent potential temperature (θe) advection, moisture convergence, and positive thermal advection at 850 hPa. Composite fields at 500 hPa show that the MCS forms in a region of weak anticyclonic curvature in the height field with marginal positive vorticity advection. Even though surface-based stability fields indicate stable low-level air, there is a layer of convectively unstable air with maximum-θe CAPE values of more than 1000 J kg⁻¹ in the vicinity of the MCS site and higher values upstream. Maximum-θe convective inhibition (CIN) values over the MCS centroid site are small (less than 40 J kg⁻¹), while to the south convection is limited by large values of CIN (greater than 60 J kg⁻¹). Surface-to-500-hPa composite average relative humidity values are about 70%, and composite precipitable water values average about 3.18 cm (1.25 in.). The representativeness of the composite analysis is also examined. Last, a schematic conceptual model based upon the composite fields is presented that depicts the typical environment favorable for the development of elevated thunderstorms that lead to heavy rainfall.
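Storm-relative compositing, as described above, amounts to re-centring each case's fields on its heavy-rain region before averaging. A minimal sketch of that averaging step, on toy 2-D grids rather than the study's actual kinematic and thermodynamic fields:

```python
def composite(fields, centers, half=1):
    """Storm-relative composite: average a (2*half+1)-square window of
    each 2-D field, re-centred on that case's storm location (ci, cj)."""
    size = 2 * half + 1
    acc = [[0.0] * size for _ in range(size)]
    for field, (ci, cj) in zip(fields, centers):
        for di in range(-half, half + 1):
            for dj in range(-half, half + 1):
                acc[di + half][dj + half] += field[ci + di][cj + dj]
    n = len(fields)
    return [[v / n for v in row] for row in acc]

# Two toy 5x5 "moisture convergence" grids with maxima at different
# grid points; re-centring aligns both maxima at the window centre.
f1 = [[0] * 5 for _ in range(5)]; f1[1][1] = 10
f2 = [[0] * 5 for _ in range(5)]; f2[3][2] = 8
comp = composite([f1, f2], [(1, 1), (3, 2)])
```

The design point is that features tied to the storm (here the maxima) reinforce in the composite, while features at fixed geographic locations average away.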

    Disagreeable Privacy Policies: Mismatches between Meaning and Users’ Understanding

    Privacy policies are verbose, difficult to understand, take too long to read, and may be the least-read items on most websites even as users express growing concerns about information collection practices. For all their faults, though, privacy policies remain the single most important source of information for users to attempt to learn how companies collect, use, and share data. Likewise, these policies form the basis for the self-regulatory notice and choice framework that is designed and promoted as a replacement for regulation. The underlying value and legitimacy of notice and choice depends, however, on the ability of users to understand privacy policies. This paper investigates the differences in interpretation among expert, knowledgeable, and typical users and explores whether those groups can understand the practices described in privacy policies at a level sufficient to support rational decision-making. The paper seeks to fill an important gap in the understanding of privacy policies through primary research on user interpretation and to inform the development of technologies combining natural language processing, machine learning and crowdsourcing for policy interpretation and summarization. For this research, we recruited a group of law and public policy graduate students at Fordham University, Carnegie Mellon University, and the University of Pittsburgh (“knowledgeable users”) and presented these law and policy researchers with a set of privacy policies from companies in the e-commerce and news & entertainment industries. We asked them nine basic questions about the policies’ statements regarding data collection, data use, and retention. We then presented the same set of policies to a group of privacy experts and to a group of non-expert users. 
The findings show areas of common understanding across all groups for certain data collection and deletion practices, but also demonstrate very important discrepancies in the interpretation of privacy policy language, particularly with respect to data sharing. The discordant interpretations arose both within groups and between the experts and the two other groups. The presence of these significant discrepancies has critical implications. First, the common understandings of some attributes of described data practices mean that semi-automated extraction of meaning from website privacy policies may be able to assist typical users and improve the effectiveness of notice by conveying the true meaning to users. However, the disagreements among experts and disagreement between experts and the other groups reflect that ambiguous wording in typical privacy policies undermines the ability of privacy policies to effectively convey notice of data practices to the general public. The results of this research will, consequently, have significant policy implications for the construction of the notice and choice framework and for the US reliance on this approach. The gap in interpretation indicates that privacy policies may be misleading the general public and that those policies could be considered legally unfair and deceptive. And, where websites are not effectively conveying privacy policies to consumers in a way that a “reasonable person” could, in fact, understand the policies, “notice and choice” fails as a framework. Such a failure has broad international implications since websites extend their reach beyond the United States.
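The within-group and between-group disagreement the study reports is the kind of thing an inter-rater agreement statistic quantifies. As a hedged sketch (hypothetical yes/no annotations, and not necessarily the metric the authors used), Fleiss' kappa distinguishes high-agreement questions from contested ones:

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for a table ratings[item][category] = number of
    raters assigning that category to that item, with every item rated
    by the same number of raters."""
    n_items = len(ratings)
    n_raters = sum(ratings[0])
    # Observed per-item agreement, averaged over items
    p_items = [
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in ratings
    ]
    p_bar = sum(p_items) / n_items
    # Expected agreement from the category marginals
    totals = [sum(row[k] for row in ratings) for k in range(len(ratings[0]))]
    p_cats = [t / (n_items * n_raters) for t in totals]
    p_e = sum(p * p for p in p_cats)
    return (p_bar - p_e) / (1 - p_e)

# Hypothetical yes/no answers from 5 annotators to 4 policy questions:
# one panel mostly agrees, the other splits nearly evenly.
high_agreement = [[5, 0], [0, 5], [5, 0], [4, 1]]
mixed          = [[3, 2], [2, 3], [3, 2], [2, 3]]
k_high = fleiss_kappa(high_agreement)
k_mixed = fleiss_kappa(mixed)
```

Near-even splits can push kappa below zero (worse than chance-expected agreement), which is one way the "discordant interpretations" described above would show up numerically.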

    Similarity thresholds used in DNA sequence assembly from short reads can reduce the comparability of population histories across species

    Comparing inferences among datasets generated using short read sequencing may provide insight into the concerted impacts of divergence, gene flow and selection across organisms, but comparisons are complicated by biases introduced during dataset assembly. Sequence similarity thresholds allow the de novo assembly of short reads into clusters of alleles representing different loci, but the resulting datasets are sensitive to both the similarity threshold used and to the variation naturally present in the organism under study. Thresholds that require high sequence similarity among reads for assembly (stringent thresholds) as well as highly variable species may result in datasets in which divergent alleles are lost or divided into separate loci (‘over-splitting’), whereas liberal thresholds increase the risk of paralogous loci being combined into a single locus (‘under-splitting’). Comparisons among datasets or species are therefore potentially biased if different similarity thresholds are applied or if the species differ in levels of within-lineage genetic variation. We examine the impact of a range of similarity thresholds on assembly of empirical short read datasets from populations of four different non-model bird lineages (species or species pairs) with different levels of genetic divergence. We find that, in all species, stringent similarity thresholds result in fewer alleles per locus than more liberal thresholds, which appears to be the result of high levels of over-splitting. The frequency of putative under-splitting, conversely, is low at all thresholds. Inferred genetic distances between individuals, gene tree depths, and estimates of the ancestral mutation-scaled effective population size (θ) differ depending upon the similarity threshold applied. Relative differences in inferences across species differ even when the same threshold is applied, but may be dramatically different when datasets assembled under different thresholds are compared. 
These differences not only complicate comparisons across species, but also preclude the application of standard mutation rates for parameter calibration. We suggest some best practices for assembling short read data to maximize comparability, such as using more liberal thresholds and examining the impact of different thresholds on each dataset.
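The over-splitting versus under-splitting trade-off can be made concrete with a toy greedy single-linkage clusterer (hypothetical reads; real assembly pipelines are far more sophisticated than this sketch):

```python
def identity(a, b):
    """Fraction of matching positions between two equal-length reads."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cluster(reads, threshold):
    """Greedy single-linkage clustering: a read joins the first cluster
    containing any member with identity >= threshold, else starts a
    new cluster (a new putative locus)."""
    clusters = []
    for r in reads:
        for c in clusters:
            if any(identity(r, m) >= threshold for m in c):
                c.append(r)
                break
        else:
            clusters.append([r])
    return clusters

# Two divergent alleles of one hypothetical locus (3 mismatches in 10 bp,
# i.e. 70% identity between alleles).
reads = ["ACGTACGTAC", "ACGTACGTAC", "ATGTACCTAA", "ATGTACCTAA"]
strict = cluster(reads, 0.95)   # stringent: over-splits into two "loci"
liberal = cluster(reads, 0.60)  # liberal: keeps both alleles together
```

At the stringent threshold the two alleles land in separate clusters (over-splitting, deflating within-locus variation); at the liberal threshold they assemble as one locus, illustrating why threshold choice changes downstream estimates of genetic distance and θ.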